
Various improvements to chat interface#18

Merged
ScriptSmith merged 23 commits into main from chat-fixes on Mar 22, 2026

Conversation

@ScriptSmith
Owner

No description provided.

@greptile-apps
Contributor

greptile-apps bot commented Mar 21, 2026

Greptile Summary

This PR delivers several quality-of-life improvements to the chat interface: a new compact-mode toggle (hides reasoning sections and tool details), a richer ContentRound component that renders each tool-calling iteration as a structured reasoning→content→tools block, a smarter StreamingStatusIndicator that replaces the plain "Thinking..." label with phase-aware "Thinking" / "Processing" states, and a refactored streaming architecture where the live streaming section is now placed inside the virtualizer (no longer absolutely-positioned outside it). The ToolCallIndicator component is removed and its logic is absorbed into the new indicator and ContentRound.

Key changes:

  • ContentRound (new file): self-contained per-round block with compact-mode awareness, display_artifacts layout support, and local expand/collapse for tool execution.
  • streamingStore: three new actions — pushCompletedRound (resets content/reasoning for the next round), setCompletedRoundToolExecution (attaches a tool round to the last completed entry), and resumeStreaming (re-enables the streaming flag between rounds).
  • useChat.ts / streamWithClientSideTools: replaces the flat accumulatedContent string with an allCompletedRounds array; each round's reasoning, content, and tool execution is tracked independently.
  • chatUIStore: adds compactMode state persisted to localStorage; defaults to true (compact ON) for users who have never set the preference.
  • P1 issue found: allOutputArtifacts in ModelResponseCard iterates both response.completedRounds[i].toolExecution and toolExecutionRounds — but for every multi-round response both are populated (both are returned by streamWithClientSideTools and stored), so output artifacts are added twice. The find semantics in ContentRound prevent double-rendering, but the backing array is incorrect.
  • P2 suggestions: compact mode defaults true for all new/upgrading users (may be intentional but worth confirming); reasoning-phase detection uses strict string equality which can misfire when two rounds have identical reasoning text; modeMetadata badge reads from group.assistantResponses[0] instead of committedResponses[0].

Confidence Score: 3/5

  • Safe to merge after fixing the allOutputArtifacts double-accumulation bug; the P2s are non-blocking.
  • The architectural direction is solid and most of the refactoring is clean. The one P1 issue (artifact double-counting in allOutputArtifacts) is a concrete data-correctness bug that will affect any multi-round tool response once display_artifacts is called — it needs a targeted one-line fix before merge. The P2s (compact mode default, reasoning equality, stale modeMetadata guard) are non-blocking improvements. Prior thread concerns (virtualizer side-effect mutation, setContent cleanup guard, toolExecutionRound index alignment) are addressed in this PR.
  • Files needing attention: ui/src/components/MultiModelResponse/MultiModelResponse.tsx (allOutputArtifacts memo) and ui/src/stores/chatUIStore.ts (loadCompactMode default)

Important Files Changed

  • ui/src/components/MultiModelResponse/MultiModelResponse.tsx — Major refactor: adds compact-mode toggle, replaces TypingIndicator with phase-aware StreamingStatusIndicator, and introduces ContentRound-based multi-round rendering — but allOutputArtifacts memo double-counts artifacts for multi-round responses (both completedRounds and toolExecutionRounds contain the same artifacts).
  • ui/src/components/ChatMessageList/ChatMessageList.tsx — Streaming section moved inside the virtualizer; committedResponses filtering added for regeneration support; modeMetadata mode-badge guard still reads from group.assistantResponses[0] rather than committedResponses[0], risking an orphaned badge when the first response is being regenerated.
  • ui/src/components/MultiModelResponse/ContentRound.tsx — New component that cleanly encapsulates a single reasoning→content→tool-execution block; compact-mode path correctly hides reasoning/tools; display_artifacts layout handled via passed allOutputArtifacts reference.
  • ui/src/stores/streamingStore.ts — Adds pushCompletedRound (resets content/reasoning for next round), setCompletedRoundToolExecution, and resumeStreaming; all three use standard immutable Zustand patterns; resumeStreaming correctly sets both instance and global isStreaming.
  • ui/src/pages/chat/useChat.ts — Replaces accumulatedContent string with allCompletedRounds array; per-round data is pushed to streaming store for live interleaved UI; hasOutputText flag distinguishes reasoning-only rounds cleanly; setContent cleanup guard (iterations > 1) flagged in prior threads remains.
  • ui/src/stores/chatUIStore.ts — Adds compactMode state with localStorage persistence; loadCompactMode defaults to true (compact ON) for any user who has never set the preference — may surprise returning users upgrading from a build without this feature.
  • ui/src/components/chat-types.ts — Adds CompletedRound interface (reasoning, content, toolExecution optional fields) and extends ChatMessage with completedRounds field; clean addition with no conflicts.
  • ui/src/pages/chat/utils/toolCallParser.ts — ToolCall/ToolCallType/ToolCallStatus types inlined here after ToolCallIndicator removal; no functional change to parsing logic.
  • ui/src/components/ToolExecution/ToolExecutionStep.tsx — Adds display_artifacts icon/name mapping and a metadata summary line (count + layout) for the display_artifacts tool; straightforward additive change.
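The new types described for chat-types.ts might look roughly like the sketch below. Field shapes and optionality are assumptions inferred from the review ("reasoning, content, toolExecution optional fields"), not the actual file:

```typescript
// Hypothetical sketch of the additions to ui/src/components/chat-types.ts,
// inferred from the review summary — not the actual source.
interface ToolExecutionRound {
  executions: unknown[]; // per-tool execution records (shape not specified in the review)
}

interface CompletedRound {
  reasoning?: string;
  content?: string;
  toolExecution?: ToolExecutionRound;
}

interface ChatMessage {
  role: "user" | "assistant";
  content: string;
  completedRounds?: CompletedRound[]; // new field: one entry per tool-calling round
}
```

Keeping all three per-round fields optional lets a round be reasoning-only, content-only, or tool-only, which matches the hasOutputText distinction the review mentions for useChat.ts.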

Sequence Diagram

sequenceDiagram
    participant UC as useChat
    participant SS as streamingStore
    participant MMR as ModelResponseCard
    participant CR as ContentRound

    UC->>SS: initStream(storeKey)
    loop Each tool-calling round
        UC->>SS: appendContent / appendReasoningContent (deltas)
        MMR->>SS: reads content + reasoningContent (live)
        MMR->>CR: renders single-round path (no completedRounds yet)
        UC->>SS: completeStream(storeKey) — isStreaming=false
        UC->>SS: pushCompletedRound(roundData) — reset content/reasoning, add to completedRounds
        UC->>SS: resumeStreaming(storeKey) — isStreaming=true
        MMR->>SS: reads completedRounds[n], content="" → switches to multi-round path
        MMR->>CR: renders completed rounds + StreamingStatusIndicator(thinking/processing)
        UC->>SS: startExecutionRound + addExecutions (pending)
        Note over UC: executeTools runs async
        UC->>SS: updateExecution (running → success/failed)
        UC->>SS: setCompletedRoundToolExecution(round) — attaches to last completedRound
        MMR->>CR: toolExecutionRound set on last round
    end
    UC->>SS: completeStream — final
    UC->>SS: pushCompletedRound (final round)
    UC-->>UC: returns {content, completedRounds, toolExecutionRounds}
    UC->>ConvStore: addAssistantResponses(completedRounds, toolExecutionRounds)
    MMR->>CR: committed render — isStreaming=false, all rounds with toolExecution

Comments Outside Diff (4)

  1. ui/src/components/MultiModelResponse/MultiModelResponse.tsx, line 701-724 (link)

    Artifacts double-counted in allOutputArtifacts

    streamWithClientSideTools always returns both completedRounds (with toolExecution on each round) and toolExecutionRounds (the same rounds, separately). Both are stored in the conversation and passed down as response.completedRounds and response.toolExecutionRounds, which means toolExecutionRounds is never empty when completedRounds is set.

    The current memo loops over both paths and pushes the same output artifacts twice — once from completedRounds[i].toolExecution.executions and once from toolExecutionRounds. The comment "Also check standalone toolExecutionRounds (single-round path)" reflects the intended guard, but the guard is missing: toolExecutionRounds is non-empty for multi-round responses too.

    The find call in ContentRound.displayedArtifacts returns the first match per ID, so rendering itself won't duplicate artifacts, but allOutputArtifacts is larger than necessary and any code path that iterates or counts it (e.g. deduplication logic added later) will see phantom duplicates.

```ts
// Fix: skip toolExecutionRounds when completedRounds is present
const allOutputArtifacts = useMemo(() => {
  const result: ArtifactType[] = [];
  const rounds = response.completedRounds ?? [];
  if (rounds.length > 0) {
    for (const round of rounds) {
      if (round.toolExecution) {
        for (const execution of round.toolExecution.executions) {
          for (const a of execution.outputArtifacts) {
            if (a.type !== "display_selection") result.push(a);
          }
        }
      }
    }
  } else {
    // Single-round path: fall back to standalone toolExecutionRounds
    for (const round of toolExecutionRounds) {
      for (const execution of round.executions) {
        for (const a of execution.outputArtifacts) {
          if (a.type !== "display_selection") result.push(a);
        }
      }
    }
  }
  return result;
}, [response.completedRounds, toolExecutionRounds]);
```
  2. ui/src/stores/chatUIStore.ts, line 353-358 (link)

    Compact mode defaults true for all new users

    localStorage.getItem("hadrian:compactMode") !== "false" returns true whenever the key is absent (first-ever visit) or when localStorage throws. That means every new user starts with compact mode on, which hides reasoning sections and tool execution details by default.

    Both story decorators explicitly call useChatUIStore.setState({ compactMode: false }) to restore full detail, suggesting the canonical "show everything" view is the intended default for review/QA. If the intent is to ship compact as the default, this is fine — but it's worth double-checking whether users who upgrade from a previous release (who have never set the key) are also meant to be flipped to compact on their first load.

  3. ui/src/components/MultiModelResponse/MultiModelResponse.tsx, line 959-963 (link)

    Reasoning detection uses strict string equality — can misfire with identical content across rounds

    !rounds!.some(r => r.reasoning === response.reasoningContent) treats the current reasoning as "in-flight" only when no completed round has exactly the same string. If two consecutive rounds happen to produce identical reasoning text (uncommon but possible with short or repetitive models), the second round's reasoning would be hidden — currentReasoning evaluates to null and the in-flight ContentRound is suppressed until content arrives.

    A more robust approach is to compare object identity or use an index/generation counter rather than the string value, for example:

```ts
const lastRound = rounds[rounds.length - 1];
const currentReasoning =
  hasReasoning && lastRound?.reasoning !== response.reasoningContent
    ? response.reasoningContent
    : null;
```

    This correctly handles the common case (last committed round's reasoning will differ from the live streaming reasoning) without being fooled by identical text.

  4. ui/src/components/ChatMessageList/ChatMessageList.tsx, line 113-115 (link)

    Mode indicator modeMetadata read from potentially stale group.assistantResponses

    After this PR, the committed responses shown in the virtualizer are committedResponses (filtered to exclude actively-streaming instances), but the mode-badge guard on line 114 still reads group.assistantResponses[0].modeMetadata:

```tsx
{group.assistantResponses[0].modeMetadata?.mode === "routed" && (
```

    If the first response in assistantResponses is currently being regenerated (and is thus excluded from committedResponses), the badge guard reads a potentially-absent entry while the rendered committedResponses array starts at a different element. This won't crash since assistantResponses is still the full original array, but the mode indicator could appear even when committedResponses is empty (no committed responses to display), resulting in an orphaned routing badge.

    Consider using committedResponses[0]?.modeMetadata instead.

Reviews (4): last reviewed commit "Review fixes"

@ScriptSmith
Owner Author

@greptile-apps

@ScriptSmith
Owner Author

@greptile-apps

@ScriptSmith
Owner Author

@greptile-apps

@ScriptSmith ScriptSmith merged commit 1ad51e6 into main Mar 22, 2026
18 of 19 checks passed
@ScriptSmith ScriptSmith deleted the chat-fixes branch March 22, 2026 11:22